Critical State
LLM-Guided Reinforcement Learning: Addressing Training Bottlenecks through Policy Modulation
While reinforcement learning (RL) has achieved notable success in various domains, training effective policies for complex tasks remains challenging. Agents often converge to local optima and fail to maximize long-term rewards. Existing approaches to mitigate training bottlenecks typically fall into two categories: (i) Automated policy refinement, which identifies critical states from past trajectories to guide policy updates, but suffers from costly and uncertain model training; and (ii) Human-in-the-loop refinement, where human feedback is used to correct agent behavior, but this does not scale well to environments with large or continuous action spaces. In this work, we design a large language model-guided policy modulation framework that leverages LLMs to improve RL training without additional model training or human intervention. We first prompt an LLM to identify critical states from a sub-optimal agent's trajectories. Based on these states, the LLM then provides action suggestions and assigns implicit rewards to guide policy refinement. Experiments across standard RL benchmarks demonstrate that our method outperforms state-of-the-art baselines, highlighting the effectiveness of LLM-based explanations in addressing RL training bottlenecks.
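To make the mechanism concrete, below is a minimal sketch of the two LLM roles the abstract describes (critical-state identification and implicit reward assignment), assuming a hypothetical `query_llm(prompt) -> str` helper; the prompt formats and the shaping coefficient are illustrative, not the paper's actual implementation.

```python
# Sketch of LLM-guided policy modulation. query_llm is a hypothetical
# text-in/text-out LLM interface; trajectory is a list of JSON-serializable
# (step, state, action, reward) tuples.
import json

def identify_critical_states(query_llm, trajectory):
    """Ask the LLM which timesteps in a logged trajectory were critical."""
    prompt = (
        "Here is an agent trajectory as (step, state, action, reward) tuples:\n"
        f"{json.dumps(trajectory)}\n"
        "Return a JSON list of step indices where a better action "
        "would most improve the return."
    )
    return json.loads(query_llm(prompt))

def llm_shaped_reward(query_llm, state, action, env_reward, critical_steps, step):
    """Blend the environment reward with an implicit LLM-assigned reward."""
    if step not in critical_steps:
        return env_reward
    prompt = (
        f"In state {state} the agent took action {action}. "
        "Score this action from -1 (bad) to 1 (good). Reply with a number only."
    )
    implicit = float(query_llm(prompt))
    return env_reward + 0.1 * implicit  # small shaping coefficient (assumed)
```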
Distilling Realizable Students from Unrealizable Teachers
Kim, Yujin, Chin, Nathaniel, Vasudev, Arnav, Choudhury, Sanjiban
We study policy distillation under privileged information, where a student policy with only partial observations must learn from a teacher with full-state access. A key challenge is information asymmetry: the student cannot directly access the teacher's state space, leading to distributional shifts and policy degradation. Existing approaches either modify the teacher to produce realizable but sub-optimal demonstrations or rely on the student to explore missing information independently, both of which are inefficient. Our key insight is that the student should strategically interact with the teacher, querying only when necessary and resetting from recovery states, in order to stay on a recoverable path within its own observation space. We introduce two methods: (i) an imitation learning approach that adaptively determines when the student should query the teacher for corrections, and (ii) a reinforcement learning approach that selects where to initialize training for efficient exploration. The project website is available here. Robots operating in the real world must learn to act effectively despite partial observations and limited ability to explore. Unlike in simulation, where policies have access to privileged state information, real-world policies must make decisions based on incomplete inputs [1]-[3].
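A minimal sketch of the "query only when necessary" interaction pattern follows, assuming hypothetical student/teacher policy objects with `.act()` methods, an uncertainty estimate for the student, and an environment that exposes both the partial observation and the full state. This illustrates the interaction loop, not the paper's exact algorithm.

```python
# Roll out the student; query the teacher for corrective labels only on
# steps where the student is uncertain. The env interface is an assumption.
def collect_with_adaptive_queries(env, student, teacher, uncertainty,
                                  threshold=0.5, horizon=200):
    obs, full_state = env.reset()        # student sees obs, teacher sees full_state
    dataset = []                         # (obs, teacher_action) pairs for imitation
    for _ in range(horizon):
        action = student.act(obs)
        if uncertainty(student, obs) > threshold:
            action = teacher.act(full_state)   # costly query, used sparingly
            dataset.append((obs, action))
        obs, full_state, done = env.step(action)
        if done:
            break
    return dataset
```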
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.47)
Adaptive Querying for Reward Learning from Human Feedback
Anand, Yashwanthi, Saisubramanian, Sandhya
Learning from human feedback is a popular approach to train robots to adapt to user preferences and improve safety. Existing approaches typically consider a single querying (interaction) format when seeking human feedback and do not leverage multiple modes of user interaction with a robot. We examine how to learn a penalty function associated with unsafe behaviors, such as side effects, using multiple forms of human feedback, by optimizing both the query state and the feedback format. Our framework for adaptive feedback selection enables querying for feedback in critical states in the most informative format, while accounting for the cost and probability of receiving feedback in a given format. We employ an iterative, two-phase approach that first selects critical states for querying, and then uses information gain to select a feedback format for querying across the sampled critical states. Our evaluation in simulation demonstrates the sample efficiency of our approach.
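The format-selection step can be made concrete with a small information-gain computation. The sketch below assumes a discrete posterior over candidate penalty functions and known per-format response likelihoods; all names, the cost model, and the score combination are illustrative assumptions, not the paper's exact formulation.

```python
# Pick a feedback format by expected information gain, discounted by the
# probability of getting an answer and the cost of asking in that format.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_info_gain(posterior, likelihoods):
    """likelihoods[h, r] = P(response r | hypothesis h) for one format."""
    prior_h = entropy(posterior)
    p_response = posterior @ likelihoods            # marginal over responses
    gain = prior_h
    for r, p_r in enumerate(p_response):
        if p_r > 0:
            post = posterior * likelihoods[:, r] / p_r   # Bayes update
            gain -= p_r * entropy(post)
    return gain

def select_format(posterior, formats):
    """formats: list of (name, likelihood_matrix, cost, p_answer) tuples."""
    scores = {
        name: p_answer * expected_info_gain(posterior, lik) - cost
        for name, lik, cost, p_answer in formats
    }
    return max(scores, key=scores.get)
```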
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.47)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.47)
Crafting desirable climate trajectories with RL-explored socio-environmental simulations
Rudd-Jones, James, Thendean, Fiona, Pérez-Ortiz, María
Climate change poses an existential threat, necessitating effective climate policies to enact impactful change. Decisions in this domain are incredibly complex, involving conflicting entities and evidence. In recent decades, policymakers have increasingly used simulations and computational methods to guide some of their decisions. Integrated Assessment Models (IAMs) are one such method, combining social, economic, and environmental simulations to forecast potential policy effects. For example, the UN uses outputs of IAMs for its recent Intergovernmental Panel on Climate Change (IPCC) reports. Traditionally these models have been solved with recursive equation solvers, which have several shortcomings, e.g. they struggle with decision making under uncertainty. Recent preliminary work using Reinforcement Learning (RL) to replace the traditional solvers shows promising results for decision making in uncertain and noisy scenarios. We extend this work by introducing multiple interacting RL agents as a preliminary analysis of the complex interplay of socio-interactions between various stakeholders or nations that drives much of the current climate crisis. Our findings show that cooperative agents in this framework can consistently chart pathways towards more desirable futures in terms of reduced carbon emissions and improved economic outcomes. However, once competition between agents is introduced, for instance via opposing reward functions, desirable climate futures are rarely reached. Because modelling competition is key to increased realism in these simulations, we interpret the learned policies by visualising which states lead to more uncertain behaviour, in order to understand algorithm failure. Finally, we highlight current limitations and avenues for further work to ensure future technology uptake for policy derivation.
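As a toy illustration of the cooperative-versus-competitive setup the abstract contrasts, the sketch below wires two agents to either a shared welfare signal or opposing reward functions; the reward terms are illustrative stand-ins, not the paper's actual IAM quantities.

```python
# Per-agent rewards for two agents given shared simulation outputs.
# emissions and gdp are per-agent lists from one simulation step.
def rewards(emissions, gdp, mode="cooperative"):
    if mode == "cooperative":
        shared = sum(gdp) - sum(emissions)   # one global welfare signal
        return [shared, shared]
    # competitive: opposing reward functions, each agent gains when the
    # other's economy falls behind (a zero-sum-style stand-in)
    return [gdp[0] - gdp[1], gdp[1] - gdp[0]]
```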
- Asia > Singapore (0.14)
- Europe > United Kingdom (0.14)
- Government (1.00)
- Energy > Oil & Gas (0.88)
Acceleration method for generating perception failure scenarios based on editing the Markov process
With the rapid advancement of autonomous driving technology, self-driving cars have become a central focus in the development of future transportation systems. Scenario generation has emerged as a crucial tool for testing and verifying the safety performance of autonomous driving systems. Current research on scenario generation focuses primarily on open roads such as highways, with relatively few studies on underground parking garages. The unique structural constraints, insufficient lighting, and high-density obstacles of underground parking garages place greater demands on perception systems, which are critical to autonomous driving. This study proposes an accelerated generation method for perception failure scenarios tailored to the underground parking garage environment, aimed at testing and improving the safety performance of autonomous vehicle (AV) perception algorithms in such settings. The method generates an intelligent testing environment with a high density of perception failure scenarios by learning the interactions between background vehicles (BVs) and AVs within such scenarios. Furthermore, it edits the Markov process underlying the perception failure scenario data to increase the density of critical information in the training data, thereby improving the learning and generation of perception failure scenarios. A simulation environment for an underground parking garage was developed with the Carla and Vissim platforms, with BEVFusion employed as the perception algorithm under test. The study demonstrates that this method can generate an intelligent testing environment with a high density of perception failure scenarios and enhance the safety performance of perception algorithms within this experimental setup.
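One simple way to read "editing the Markov process to increase the density of critical information" is to drop long uneventful stretches of a recorded episode and keep only a context window around each perception-failure step. The sketch below assumes hypothetical `is_failure` labels and a window size; it illustrates the editing idea, not the paper's exact procedure.

```python
# Keep a short context window around each perception-failure transition,
# discarding the rest of the episode to densify critical data.
def edit_episode(transitions, is_failure, context=5):
    """transitions: list of (s, a, s'); is_failure: parallel list of bools."""
    keep = set()
    for t, failed in enumerate(is_failure):
        if failed:
            lo, hi = max(0, t - context), min(len(transitions), t + context + 1)
            keep.update(range(lo, hi))
    return [transitions[t] for t in sorted(keep)]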
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.61)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.47)
Residual resampling-based physics-informed neural network for neutron diffusion equations
Zhang, Heng, He, Yun-Ling, Liu, Dong, Hang, Qin, Yao, He-Min, Xiang, Di
The neutron diffusion equation plays a pivotal role in the analysis of nuclear reactors. Nevertheless, solving it with the Physics-Informed Neural Network (PINN) method entails certain limitations. Traditional PINN approaches often use a fully connected network (FCN) architecture, which is susceptible to overfitting, training instability, and vanishing gradients as network depth increases. These challenges create accuracy bottlenecks in the solution. In response, we propose the Residual-based Resample Physics-Informed Neural Network (R2-PINN), an improved PINN architecture that replaces the FCN with a convolutional neural network with shortcut connections (S-CNN), incorporating skip connections to facilitate gradient propagation between network layers. Additionally, a Residual Adaptive Resampling (RAR) mechanism dynamically adds sampling points, enhancing the spatial representation capability and overall predictive accuracy of the model. Experimental results show that our approach significantly improves the model's convergence, achieving high-precision predictions of the physical fields. Compared with traditional FCN-based PINN methods, R2-PINN effectively overcomes the limitations inherent in current methods, providing more accurate and robust solutions for neutron diffusion equations.
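The RAR step itself is simple to state: score a pool of random candidate points by the magnitude of the PDE residual and add the worst offenders to the training set. The sketch below assumes a hypothetical `pde_residual(model, x)` that returns the per-point residual magnitude via autograd; pool size and top-k are illustrative choices.

```python
# One Residual Adaptive Resampling (RAR) step for a PINN.
import torch

def rar_step(model, pde_residual, train_pts, domain_lo, domain_hi,
             n_candidates=10000, k=100):
    """Add the k candidates with the largest PDE residual to the training set."""
    dim = train_pts.shape[1]
    candidates = domain_lo + (domain_hi - domain_lo) * torch.rand(n_candidates, dim)
    # pde_residual typically needs input gradients, so don't wrap in no_grad
    res = pde_residual(model, candidates.requires_grad_(True)).detach()  # (n_candidates,)
    top = torch.topk(res, k).indices
    return torch.cat([train_pts, candidates[top].detach()], dim=0)
```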
RICE: Breaking Through the Training Bottlenecks of Reinforcement Learning with Explanation
Cheng, Zelei, Wu, Xian, Yu, Jiahao, Yang, Sabrina, Wang, Gang, Xing, Xinyu
Deep reinforcement learning (DRL) is playing an increasingly important role in real-world applications. However, obtaining an optimally performing DRL agent for complex tasks, especially with sparse rewards, remains a significant challenge. The training of a DRL agent can often be trapped in a bottleneck without further progress. In this paper, we propose RICE, an innovative refining scheme for reinforcement learning that incorporates explanation methods to break through the training bottlenecks. The high-level idea of RICE is to construct a new initial state distribution that combines both the default initial states and critical states identified through explanation methods, thereby encouraging the agent to explore from the mixed initial states. Through careful design, we can theoretically guarantee that our refining scheme has a tighter sub-optimality bound. We evaluate RICE in various popular RL environments and real-world applications. The results demonstrate that RICE significantly outperforms existing refining schemes in enhancing agent performance.
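The mixed initial-state distribution reduces to a small reset rule, sketched below under the assumption that the environment exposes a `reset_to(state)` API (hypothetical here) for restarting from an arbitrary state; the mixing probability is an illustrative choice.

```python
# Reset from an explanation-identified critical state with probability p,
# otherwise from the default initial distribution (the RICE mixing idea).
import random

def mixed_reset(env, critical_states, p=0.5):
    if critical_states and random.random() < p:
        return env.reset_to(random.choice(critical_states))  # hypothetical API
    return env.reset()
```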
- Europe > Austria > Vienna (0.14)
- North America > United States > Illinois > Cook County > Evanston (0.04)
- North America > United States > Illinois > Champaign County > Urbana (0.04)
- Information Technology > Security & Privacy (1.00)
- Materials > Metals & Mining (0.67)
- Government > Military > Cyberwarfare (0.46)
Interpretable Modeling of Deep Reinforcement Learning Driven Scheduling
Li, Boyang, Lan, Zhiling, Papka, Michael E.
In the field of high-performance computing (HPC), there has been recent exploration into the use of deep reinforcement learning for cluster scheduling (DRL scheduling), which has demonstrated promising outcomes. However, a significant challenge arises from the lack of interpretability of deep neural networks (DNNs), which renders them black-box models to system managers. This lack of interpretability hinders the practical deployment of DRL scheduling. In this work, we present a framework called IRL (Interpretable Reinforcement Learning) to address the interpretability issue of DRL scheduling. The core idea is to interpret the DNN (i.e., the DRL policy) as a decision tree by utilizing imitation learning. Unlike DNNs, decision tree models are non-parametric and easily comprehensible to humans. To extract an effective and efficient decision tree, IRL incorporates the Dataset Aggregation (DAgger) algorithm and introduces the notion of critical states to prune the derived decision tree. Through trace-based experiments, we demonstrate that IRL is capable of converting a black-box DNN policy into an interpretable rule-based decision tree while maintaining comparable scheduling performance. Additionally, IRL can inform the setting of rewards in DRL scheduling.
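A minimal sketch of the distillation loop follows, using scikit-learn and a gym-style environment. The critical-state idea is approximated here by up-weighting critical samples rather than pruning the tree, so this illustrates the DAgger-based distillation pattern, not IRL's exact pruning procedure; `is_critical` and the weights are assumptions.

```python
# DAgger-style distillation of a DRL policy into a decision tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def distill(env, drl_policy, is_critical, iters=5, horizon=500):
    X, y, w = [], [], []
    tree = None
    for _ in range(iters):
        obs = env.reset()
        for _ in range(horizon):
            expert_a = drl_policy(obs)          # query the DNN expert
            X.append(obs); y.append(expert_a)
            w.append(10.0 if is_critical(obs) else 1.0)  # emphasize critical states
            # after the first iteration, roll out the learner (DAgger)
            act = expert_a if tree is None else tree.predict([obs])[0]
            obs, _, done, _ = env.step(act)
            if done:
                break
        tree = DecisionTreeClassifier(max_depth=8)
        tree.fit(np.array(X), np.array(y), sample_weight=np.array(w))
    return tree
```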
- North America > United States > Illinois > Cook County > Chicago (0.05)
- North America > United States > New York > Suffolk County > Stony Brook (0.04)
- North America > United States > Florida > Orange County > Orlando (0.04)
- Information Technology (1.00)
- Transportation (0.69)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Decision Tree Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.90)
LADRI: LeArning-based Dynamic Risk Indicator in Automated Driving System
Patel, Anil Ranjitbhai, Liggesmeyer, Peter
As the horizon of intelligent transportation expands with the evolution of Automated Driving Systems (ADS), ensuring safety becomes more imperative than ever. Traditional risk assessment methodologies, crafted primarily for human-driven vehicles, struggle to adapt to the multifaceted, evolving environments of ADS. This paper introduces a framework for real-time Dynamic Risk Assessment (DRA) in ADS based on Artificial Neural Networks (ANNs). Our proposed solution addresses these limitations by drawing on ANNs, a cornerstone of deep learning, to analyze and categorize risk dimensions from real-time On-board Sensor (OBS) data. This learning-centric approach not only elevates the ADS's situational awareness but also enriches its understanding of immediate operational contexts. By dissecting OBS data, the system can pinpoint its current risk profile, thereby improving safety for onboard passengers and the broader traffic ecosystem. Through this framework, we chart a direction in risk assessment that bridges conventional gaps and enhances the proficiency of ADS. By utilizing ANNs, our methodology allows ADS to navigate and react to potential risk factors, enabling safer and more informed autonomous journeys.
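In the spirit of the described DRA module, a minimal ANN that maps on-board sensor features to discrete risk levels might look like the sketch below; the feature size, number of risk classes, and architecture are illustrative assumptions, not the paper's actual design.

```python
# A small feed-forward risk classifier over on-board sensor features.
import torch
import torch.nn as nn

class RiskIndicator(nn.Module):
    def __init__(self, n_features=32, n_risk_levels=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_risk_levels),   # logits over risk levels
        )

    def forward(self, obs_features):
        return self.net(obs_features)

# usage: risk level for one frame of (assumed 32-dim) sensor features
model = RiskIndicator()
logits = model(torch.randn(1, 32))
risk_level = logits.argmax(dim=-1).item()
```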
- Europe > Germany > Rhineland-Palatinate > Kaiserslautern (0.06)
- Europe > Germany > Rhineland-Palatinate > Landau (0.05)
- Europe > Sweden > Västmanland County > Västerås (0.04)
- Automobiles & Trucks (1.00)
- Information Technology > Security & Privacy (0.81)
- Transportation > Ground > Road (0.61)
- Information Technology > Robotics & Automation (0.61)
Learning to Identify Critical States for Reinforcement Learning from Videos
Liu, Haozhe, Zhuge, Mingchen, Li, Bing, Wang, Yuhui, Faccio, Francesco, Ghanem, Bernard, Schmidhuber, Jürgen
Recent work on deep reinforcement learning (DRL) has pointed out that algorithmic information about good policies can be extracted from offline data that lack explicit information about executed actions. For example, videos of humans or robots may convey a lot of implicit information about rewarding action sequences, but a DRL machine that wants to profit from watching such videos must first learn by itself to identify and recognize relevant states/actions/rewards. Without relying on ground-truth annotations, our new method, called Deep State Identifier, learns to predict returns from episodes encoded as videos. It then uses a form of mask-based sensitivity analysis to identify important critical states. Extensive experiments showcase our method's potential for understanding and improving agent behavior. The source code and the generated datasets are available at https://github.com/AI-Initiative-KAUST/VideoRLCS.
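A minimal sketch of mask-based sensitivity analysis over a trained return predictor follows: mask one frame at a time and measure how much the predicted return changes. The predictor interface and masking-by-zeroing are assumptions; the actual Deep State Identifier learns the mask rather than enumerating frames.

```python
# Score per-frame importance by perturbing a return predictor's input.
import torch

def frame_importance(return_predictor, frames):
    """frames: (T, C, H, W) episode video; returns per-frame importance scores."""
    with torch.no_grad():
        base = return_predictor(frames.unsqueeze(0))   # predicted episode return
        scores = []
        for t in range(frames.shape[0]):
            masked = frames.clone()
            masked[t] = 0.0                            # mask out one frame
            scores.append((base - return_predictor(masked.unsqueeze(0))).abs().item())
    return scores  # high score = candidate critical state
```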
- North America > United States (0.14)
- Europe > Germany > North Rhine-Westphalia > Upper Bavaria > Munich (0.04)
- Europe > Austria (0.04)